16 research outputs found
Learning Latent Space Dynamics for Tactile Servoing
To achieve dexterous robotic manipulation, we need to endow our robot with
tactile feedback capability, i.e. the ability to drive action based on tactile
sensing. In this paper, we specifically address the challenge of tactile
servoing: given the current tactile sensing and a target/goal tactile sensing
(memorized from a successful task execution in the past), what is the action
that will bring the current tactile sensing closer to the target tactile
sensing at the next time step? We develop a data-driven approach to acquire a
dynamics model for tactile servoing by learning from demonstration. Moreover,
our method represents the tactile sensing information as lying on a surface
(a 2D manifold) and performs manifold learning, making it applicable to any
tactile skin geometry. We evaluate our method on a contact point tracking task
using a robot equipped with a tactile finger. A video demonstrating our
approach can be seen at https://youtu.be/0QK0-Vx7WkI
Comment: Accepted for publication at the International Conference on Robotics
and Automation (ICRA) 2019. The final version for publication at ICRA 2019 is
7 pages (i.e. 6 pages of technical content, including text, figures, tables,
acknowledgements, etc., and 1 page of bibliography/references), while this
arXiv version is 8 pages (added an Appendix and some extra details).
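The servoing loop described above can be sketched as follows. This is a minimal illustration, not the paper's method: the linear encoder `E`, the linear latent dynamics `(A, B)`, and the candidate-action search are all stand-in assumptions for the learned latent-space models, used only to show the encode-predict-select structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy models: a linear "encoder" mapping a 16-D tactile reading
# to a 4-D latent state, and linear latent dynamics z' = A z + B u.
E = rng.standard_normal((4, 16))
A = np.eye(4)
B = rng.standard_normal((4, 2))  # actions are 2-D

def encode(tactile):
    """Map a raw tactile reading to its latent representation."""
    return E @ tactile

def servo_action(current, goal, candidates):
    """Pick the candidate action whose predicted next latent state
    is closest to the goal's latent state (one servoing step)."""
    z, z_goal = encode(current), encode(goal)
    errors = [np.linalg.norm(A @ z + B @ u - z_goal) for u in candidates]
    return candidates[int(np.argmin(errors))]

candidates = [rng.standard_normal(2) for _ in range(32)]
current = rng.standard_normal(16)   # current tactile reading
goal = rng.standard_normal(16)      # memorized target tactile reading
u = servo_action(current, goal, candidates)
```

Repeating this step drives the latent error down over time; the actual paper learns the encoder and dynamics from demonstration rather than fixing them.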
Meta-Policy Learning over Plan Ensembles for Robust Articulated Object Manipulation
Recent work has shown that complex manipulation skills, such as pushing or
pouring, can be learned through state-of-the-art learning-based techniques
such as Reinforcement Learning (RL). However, these methods often have high
sample complexity, are susceptible to domain changes, and can produce unsafe
motions that a robot should not perform. On the other hand, purely geometric
model-based planning can produce complex behaviors that satisfy all the
geometric constraints of the robot but might not be dynamically feasible in a
given environment. In this work, we leverage a geometric model-based planner
to build a mixture of path-policies on which a task-specific meta-policy can
be learned to complete the task. In our results, we demonstrate that a
successful meta-policy can be learned to push a door while requiring little
data and remaining robust to model uncertainty in the environment. We tested
our method on a 7-DOF Franka Emika robot pushing a cabinet door in simulation.
Comment: 5 pages, Workshop on Learning for Task and Motion Planning (RSS 2023)
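The mixture-of-path-policies idea can be sketched as below. This is a hedged toy illustration, not the paper's formulation: the 1-D "door angle" state, the per-plan proportional controllers, the softmax gating, and the hand-picked gating scores are all assumptions, used only to show a meta-policy blending an ensemble of path-following policies.

```python
import numpy as np

def path_policy(target_angle):
    """One base policy per geometric plan: push the door toward that
    plan's target angle, with a bounded per-step action."""
    return lambda angle: np.clip(target_angle - angle, -0.1, 0.1)

def meta_policy(angle, policies, logits):
    """Meta-policy: a softmax-weighted mixture over the plan ensemble.
    In the paper the gating is learned; here the logits are fixed."""
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return sum(wi * p(angle) for wi, p in zip(w, policies))

# Three plans from the geometric planner, each with its own waypoint.
policies = [path_policy(t) for t in (0.8, 1.0, 1.2)]
logits = np.array([0.5, 2.0, 0.1])  # toy gating scores

angle = 0.0  # door starts closed
for _ in range(50):
    angle += meta_policy(angle, policies, logits)
```

The mixture converges toward a weighted compromise among the plans' targets, dominated by the highest-scoring plan; a learned gating would additionally condition the weights on the state, which is what makes the combined policy robust to model uncertainty.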